RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in Object-centric Learning
Object-centric representation learning offers the potential to overcome
limitations of image-level representations by explicitly parsing image scenes
into their constituent components. While image-level representations typically
lack robustness to natural image corruptions, the robustness of object-centric
methods remains largely untested. To address this gap, we present the
RobustCLEVR benchmark dataset and evaluation framework. Our framework takes a
novel approach to evaluating robustness: it enables the specification of causal
dependencies in the image generation process, grounded in expert knowledge, and
can produce a wide range of image corruptions unattainable in existing
robustness evaluations. Using our framework, we define several causal models of
the image corruption process which explicitly encode assumptions about the
causal relationships and distributions of each corruption type. We generate
dataset variants for each causal model on which we evaluate state-of-the-art
object-centric methods. Overall, we find that object-centric methods are not
inherently robust to image corruptions. Our causal evaluation approach exposes
model sensitivities not observed using conventional evaluation processes,
yielding greater insight into robustness differences across algorithms. Lastly,
while conventional robustness evaluations view corruptions as
out-of-distribution, we use our causal framework to show that even training on
in-distribution image corruptions does not guarantee increased model
robustness. This work provides a step towards a more concrete and substantiated
understanding of model performance and deterioration under the complex
corruption processes of the real world.
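To make the idea of a causal corruption model concrete, the sketch below is a hypothetical, minimal Python rendering of the concept; the function names, graph structure, and distributions are illustrative assumptions, not the RobustCLEVR framework's actual API. Each corruption parameter is a node in a causal graph, a node's distribution may depend on its parents, and sampling nodes in topological order yields a jointly consistent corruption to apply to an image.

```python
# Hypothetical sketch of a causal corruption model in the spirit described
# above (names, graph structure, and distributions are illustrative
# assumptions, not the RobustCLEVR framework's actual API).
import numpy as np

def sample_corruption_params(rng):
    # Root node: scene lighting level, sampled unconditionally.
    lighting = rng.uniform(0.4, 1.0)
    # Child node: sensor-noise scale depends causally on lighting (darker
    # scenes yield higher effective noise), encoding an expert assumption
    # about the image formation process.
    noise_scale = rng.gamma(shape=2.0, scale=0.02 * (1.1 - lighting))
    # Independent root node: blur strength drawn from its own marginal.
    blur_strength = rng.exponential(scale=0.8)
    return {"lighting": lighting, "noise_scale": noise_scale,
            "blur_strength": blur_strength}

def apply_corruptions(image, params, rng):
    # Apply the sampled corruptions in a fixed order: lighting, blur, noise.
    img = image * params["lighting"]
    # Crude blur stand-in: average the image with shifted copies of itself.
    k = max(1, int(round(params["blur_strength"])))
    for axis in (0, 1):
        img = (np.roll(img, k, axis=axis) + img + np.roll(img, -k, axis=axis)) / 3.0
    img = img + rng.normal(0.0, params["noise_scale"], size=img.shape)
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # stand-in for a rendered scene
corrupted = apply_corruptions(clean, sample_corruption_params(rng), rng)
```

Because noise depends on lighting in this sketch, shifting the lighting distribution also shifts the noise distribution downstream; this kind of structured, jointly varying corruption is what the abstract describes as unattainable in per-corruption benchmarks.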
A Systematic Review of Robustness in Deep Learning for Computer Vision: Mind the gap?
Deep neural networks for computer vision are deployed in increasingly
safety-critical and socially-impactful applications, motivating the need to
close the gap in model performance under varied, naturally occurring imaging
conditions. Robustness, a term used ambiguously across multiple contexts
including adversarial machine learning, here refers to preserving model
performance under naturally induced image corruptions or alterations.
We perform a systematic review to identify, analyze, and summarize current
definitions and progress towards non-adversarial robustness in deep learning
for computer vision. We find that this area of research has received
disproportionately less attention than adversarial machine learning, yet a
significant robustness gap exists, manifesting in performance degradation
similar in magnitude to that observed under adversarial conditions.
Toward developing a more transparent definition of robustness, we provide a
conceptual framework based on a structural causal model of the data generating
process and interpret non-adversarial robustness as pertaining to a model's
behavior on corrupted images corresponding to low-probability samples from the
unaltered data distribution. We identify key architecture-, data-augmentation-,
and optimization-based tactics for improving neural network robustness. This
causal perspective reveals that common practices in the literature correspond
to causal concepts. We offer directions for how future research may mind this
evident and significant non-adversarial robustness gap.
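As a worked illustration of that causal framing (my own construction for this summary, not a method from the review): if a corruption is modeled as an intervention on the data-generating process, corrupted images can still be scored under the unaltered data distribution, where they land in its low-probability tail.

```python
# Minimal illustration (my construction, not the review's method) of the
# causal framing above: a corruption acts as an intervention that pushes
# samples into the low-probability tail of the *unaltered* data
# distribution rather than into a wholly separate distribution.
import numpy as np

rng = np.random.default_rng(1)

# "Clean" data: a 1-D Gaussian stand-in for image statistics.
mu, sigma = 0.0, 1.0
clean = rng.normal(mu, sigma, size=10_000)

# A corruption (e.g., heavy sensor noise) widens the distribution, producing
# samples the clean distribution generates only rarely.
corrupted = clean + rng.normal(0.0, 3.0, size=clean.shape)

def log_density(x, mu, sigma):
    # Log-density under the unaltered (clean) data distribution.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

print("mean log p(clean)    :", log_density(clean, mu, sigma).mean())
print("mean log p(corrupted):", log_density(corrupted, mu, sigma).mean())
# Corrupted samples score far lower under the clean distribution, matching
# the reading of corruptions as low-probability draws from it.
```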